109 research outputs found
Physics-based basis functions for low-dimensional representation of the refractive index in the high energy limit
The relationship between the refractive index decrement and the
real part of the atomic form factor is used to derive a simple
polynomial functional form for the decrement far from the K-edge of the element.
The functional form, motivated by the underlying physics, follows an infinite
power sum, with most of the energy dependence captured by a single
term. The derived functional form shows excellent agreement with theoretical
and experimentally recorded values. This work helps reduce the dimensionality
of the refractive index across the energy range of x-ray radiation for
efficient forward modeling and formulation of a well-posed inverse problem in
propagation-based polychromatic phase-contrast computed tomography.
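The scaling described in this abstract can be illustrated numerically: far from absorption edges the decrement falls off roughly as an inverse power of energy, so a short basis of inverse powers captures it with very few coefficients. The constants, energy range, and basis choice below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's data): fit a synthetic refractive
# index decrement with a low-dimensional inverse-power basis in energy E
# and check that a single term dominates the energy dependence.

E = np.linspace(20.0, 100.0, 50)            # photon energies in keV, far from K-edges
delta = 3.0e-4 / E**2 + 1.0e-6 / E**3       # synthetic decrement (assumed constants)

# Physics-motivated basis: a truncated power sum in 1/E
basis = np.stack([E**-2, E**-3, E**-4], axis=1)

# Ordinary least-squares fit of the basis coefficients
coef, *_ = np.linalg.lstsq(basis, delta, rcond=None)

fit = basis @ coef
rel_err = np.max(np.abs(fit - delta) / delta)
print("coefficients:", coef)
print("max relative error:", rel_err)
```

Because the synthetic curve lies in the span of the basis, the fit is essentially exact and the leading coefficient recovers the dominant inverse-square term.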
Lose The Views: Limited Angle CT Reconstruction via Implicit Sinogram Completion
Computed Tomography (CT) reconstruction is a fundamental component of a wide
variety of applications ranging from security to healthcare. The classical
techniques require measuring projections, called sinograms, from a full
180° view of the object. This is impractical in a limited angle
scenario, when the viewing angle is less than 180°, which can occur due
to different factors including restrictions on scanning time, limited
flexibility of scanner rotation, etc. The resulting sinograms cause
existing techniques to produce highly artifact-laden reconstructions. In this
paper, we propose to address this problem through implicit sinogram completion,
on a challenging real world dataset containing scans of common checked-in
luggage. We propose a system, consisting of 1D and 2D convolutional neural
networks, that operates on a limited angle sinogram to directly produce the
best estimate of a reconstruction. Next, we use the x-ray transform on this
reconstruction to obtain a "completed" sinogram, as if it came from a full
180° measurement. We feed this to standard analytical and iterative
reconstruction techniques to obtain the final reconstruction. We show with
extensive experimentation that this combined strategy outperforms many
competitive baselines. We also propose a measure of confidence for the
reconstruction that enables a practitioner to gauge the reliability of a
prediction made by our network. We show that this measure is a strong indicator
of quality as measured by the PSNR, while not requiring ground truth at test
time. Finally, using a segmentation experiment, we show that our reconstruction
preserves the 3D structure of objects effectively.
Comment: Spotlight presentation at CVPR 201
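The completion step in this abstract (re-projecting the network's estimated reconstruction with the x-ray transform so that the completed sinogram agrees with the measured views) can be sketched with stand-ins. The nearest-neighbour Radon transform below is a toy projector, and the ground-truth phantom plays the role of the CNN's estimate; both are assumptions for illustration only.

```python
import numpy as np

def radon_nn(img, angles_deg):
    """Toy parallel-beam x-ray transform using nearest-neighbour rotation."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(angles_deg), n))
    for k, a in enumerate(np.deg2rad(angles_deg)):
        # Rotate the sampling grid about the centre, then sum along columns
        xs = np.cos(a) * (xx - c) - np.sin(a) * (yy - c) + c
        ys = np.sin(a) * (xx - c) + np.cos(a) * (yy - c) + c
        xi = np.clip(np.rint(xs).astype(int), 0, n - 1)
        yi = np.clip(np.rint(ys).astype(int), 0, n - 1)
        sino[k] = img[yi, xi].sum(axis=0)
    return sino

phantom = np.zeros((32, 32))
phantom[10:22, 12:20] = 1.0                  # simple rectangular object

limited_angles = np.arange(0, 90, 10)        # limited-angle acquisition
full_angles = np.arange(0, 180, 10)          # full 180-degree half-rotation

sino_measured = radon_nn(phantom, limited_angles)
estimate = phantom                           # stand-in for the CNN's reconstruction
sino_completed = radon_nn(estimate, full_angles)

# The completed sinogram reproduces the measured views exactly
print(np.allclose(sino_completed[:len(limited_angles)], sino_measured))
```

In the actual pipeline the completed sinogram would then be passed to a standard analytical or iterative reconstruction method.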
Generative Adversarial Networks Based Scene Generation on Indian Driving Dataset
The rate of advancement in the field of artificial intelligence (AI) has drastically increased over the past twenty years or so. From AI models that can classify every object in an image to realistic chatbots, signs of progress can be found in all fields. This work focused on a relatively new problem in the current landscape: the generative capabilities of AI. While classification and prediction models have matured and entered the mass market across the globe, generation through AI is still in its initial stages. Generative tasks consist of an AI model learning the features of a given input and using these learned values to generate completely new output values that were not originally part of the input dataset. The most common input type given to generative models is images. The most popular architectures for generative models are autoencoders and generative adversarial networks (GANs). Our study aimed to use GANs to generate realistic images from a purely semantic representation of a scene. While our model can be used on any kind of scene, we used the Indian Driving Dataset to train it. Through this work, we address three questions: (1) the scope of GANs in interpreting and understanding textures and variables in complex scenes; (2) the application of such a model in the field of gaming and virtual reality; and (3) the possible impact of generating realistic deep fakes on society.
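The adversarial training mentioned above reduces to two coupled loss terms, which can be written out with made-up discriminator outputs; no network or Indian Driving Dataset images are involved in this sketch of the standard non-saturating GAN objective.

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy for discriminator probability p vs. target label."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

d_real = np.array([0.9, 0.8])   # discriminator outputs on real images (made up)
d_fake = np.array([0.2, 0.3])   # discriminator outputs on generated images (made up)

# Discriminator loss: push real outputs toward 1 and fake outputs toward 0
loss_d = bce(d_real, 1.0).mean() + bce(d_fake, 0.0).mean()
# Generator loss (non-saturating form): push fake outputs toward 1
loss_g = bce(d_fake, 1.0).mean()
print("discriminator loss:", loss_d)
print("generator loss:", loss_g)
```

Training alternates gradient steps on these two losses; in a conditional setup such as semantic-map-to-image generation, both networks additionally receive the semantic map as input.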
DOLCE: A Model-Based Probabilistic Diffusion Framework for Limited-Angle CT Reconstruction
Limited-Angle Computed Tomography (LACT) is a non-destructive evaluation
technique used in a variety of applications ranging from security to medicine.
The limited angle coverage in LACT is often a dominant source of severe
artifacts in the reconstructed images, making it a challenging inverse problem.
We present DOLCE, a new deep model-based framework for LACT that uses a
conditional diffusion model as an image prior. Diffusion models are a recent
class of deep generative models that are relatively easy to train due to their
implementation as image denoisers. DOLCE can form high-quality images from
severely under-sampled data by integrating data-consistency updates with the
sampling updates of a diffusion model, which is conditioned on the transformed
limited-angle data. We show through extensive experimentation on several
challenging real LACT datasets that the same pre-trained DOLCE model achieves
state-of-the-art (SOTA) performance on drastically different types of images. Additionally, we
show that, unlike standard LACT reconstruction methods, DOLCE naturally enables
the quantification of the reconstruction uncertainty by generating multiple
samples consistent with the measured data.
Comment: 29 pages, 21 figures
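The interleaving of sampling updates with data-consistency updates can be sketched with stand-in operators. A random matrix below plays the role of the limited-angle x-ray transform, and a simple shrinkage step stands in for the trained conditional diffusion denoiser; both substitutions are illustrative assumptions, not the DOLCE implementation.

```python
import numpy as np

# Stand-in operators: A mimics a limited-angle measurement operator,
# shrinkage mimics a learned denoising (prior) step.
rng = np.random.default_rng(0)
n, m = 16, 8                          # unknowns vs. measurements (underdetermined)
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))           # stand-in for the limited-angle transform
y = A @ x_true                        # "measured" limited-angle data

x = rng.normal(size=n)                # start from noise, as in diffusion sampling
A_pinv = np.linalg.pinv(A)
for t in range(50):
    x = 0.9 * x                       # crude prior/"denoising" update
    # Data-consistency update: project x onto the affine set {x : A x = y}
    x = x + A_pinv @ (y - A @ x)

print("data residual:", np.linalg.norm(A @ x - y))
```

Each iteration ends exactly consistent with the measured data; in DOLCE the shrinkage step is replaced by a conditional diffusion sampling update, which also yields multiple plausible samples for uncertainty quantification.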